Technical Sessions

Session 6

Edge Computing

Conference: 10:00 AM — 12:00 PM CST
Local: Jul 28 Wed, 10:00 PM — 12:00 AM EDT

Task Offloading with Uncertain Processing Cycles

Shaoran Li (Virginia Tech, USA), Chengzhang Li (Virginia Tech, USA), Yan Huang (Virginia Tech, USA), Brian A. Jalaian (U.S. Army Research Laboratory, USA), Y. Thomas Hou (Virginia Tech, USA), Wenjing Lou (Virginia Tech, USA)

Mobile Edge Computing (MEC) has emerged as an integral component of 5G infrastructure due to its potential to speed up task processing and reduce energy consumption for mobile devices. However, a major technical challenge in making offloading decisions is that the number of processing cycles a task requires is usually unknown in advance. Due to this processing uncertainty, it is difficult to make offloading decisions while providing any guarantee on task deadlines. To address this challenge, we propose EPD (Energy-minimized solution with Probabilistic Deadline guarantee) for the task offloading problem. The mathematical foundation of EPD is *Exact Conic Reformulation* (ECR), a powerful tool that reformulates a probabilistic constraint on the task deadline into a deterministic one. In the absence of knowledge of the distribution of processing cycles, we use their estimated mean and variance and exploit ECR to the fullest extent in the design of EPD. Simulation results show that EPD successfully guarantees the probabilistic deadlines while minimizing the energy consumption of mobile users, and achieves significant energy savings compared to a state-of-the-art approach.
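
As a rough illustration of the kind of reformulation involved (a generic Cantelli-bound argument, not necessarily the paper's ECR): let $W$ denote the random processing cycles of a task, $f$ the allocated CPU frequency, $D$ the deadline, and $\epsilon$ the allowed violation probability. When only $\mathbb{E}[W]=\mu$ and $\mathrm{Var}[W]=\sigma^2$ are known, the one-sided Chebyshev (Cantelli) inequality yields the deterministic sufficient condition

$\mu + \sigma\sqrt{(1-\epsilon)/\epsilon} \;\le\; f D \quad\Longrightarrow\quad \Pr\!\left(W/f > D\right) \le \epsilon,$

so the chance constraint on the deadline can be replaced by a constraint involving only the mean, the variance, and the decision variables.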

Inexact-ADMM Based Federated Meta-Learning for Fast and Continual Edge Learning

Sheng Yue (Central South University, China; Arizona State University, USA), Ju Ren (Central South University, China), Jiang Xin (Central South University, China), Sen Lin (Arizona State University, USA), Junshan Zhang (Arizona State University, USA)

In order to meet the performance, safety, and latency requirements of many IoT applications, intelligent decisions must be made in real time at the network edge. However, constrained resources and limited amounts of local data pose significant challenges to the development of edge AI. To overcome these challenges, we explore continual edge learning that can leverage knowledge transfer from previous tasks. Aiming to achieve fast and continual edge learning, we propose a platform-aided federated meta-learning architecture in which edge nodes collaboratively learn a meta-model, aided by knowledge transferred from prior tasks. The edge learning problem is cast as a regularized optimization problem, where the valuable knowledge learned from previous tasks is extracted as a regularizer. We then devise an ADMM-based federated meta-learning algorithm, ADMM-FedMeta, where ADMM offers a natural mechanism to decompose the original problem into many subproblems that can be solved in parallel across the edge nodes and the platform. Further, we employ a variant of the inexact-ADMM method in which the subproblems are 'solved' via linear approximation and Hessian estimation, reducing the computational cost per round to $\mathcal{O}(n)$. We provide a comprehensive analysis of ADMM-FedMeta, covering its convergence properties, rapid adaptation performance, and the forgetting effect of prior knowledge transfer, for the general non-convex case. Extensive experimental studies demonstrate the effectiveness and efficiency of ADMM-FedMeta and show that it substantially outperforms existing baselines.
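
For intuition, here is a minimal sketch of an inexact, consensus-style ADMM round of the general kind described above, where a single gradient step stands in for the exact subproblem solve. The function names, quadratic toy losses, and step sizes are illustrative assumptions, not ADMM-FedMeta's actual updates or regularizer.

import numpy as np

def local_inexact_step(theta_i, theta, dual_i, grad_fn, rho=1.0, eta=0.1):
    """One inexact ADMM primal step at edge node i.

    Rather than exactly solving
        argmin_x  f_i(x) + dual_i . (x - theta) + (rho/2) * ||x - theta||^2,
    take a single gradient step on that objective (the 'inexact' part),
    keeping the per-round cost linear in the model dimension.
    """
    g = grad_fn(theta_i)                                   # gradient of the local loss f_i
    return theta_i - eta * (g + dual_i + rho * (theta_i - theta))

def platform_step(thetas, duals, rho=1.0):
    """Platform-side consensus update: average the dual-corrected local models."""
    return np.mean([t + d / rho for t, d in zip(thetas, duals)], axis=0)

def dual_step(dual_i, theta_i, theta, rho=1.0):
    """Standard ADMM dual ascent step."""
    return dual_i + rho * (theta_i - theta)

# Tiny usage example with quadratic local losses f_i(x) = ||x - target_i||^2
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
thetas = [np.zeros(2) for _ in targets]
duals = [np.zeros(2) for _ in targets]
theta = np.zeros(2)
for _ in range(50):
    thetas = [local_inexact_step(t, theta, d, lambda x, tg=tg: 2 * (x - tg))
              for t, d, tg in zip(thetas, duals, targets)]
    theta = platform_step(thetas, duals)
    duals = [dual_step(d, t, theta) for d, t in zip(duals, thetas)]

In one communication round, the local steps run in parallel across edge nodes, after which the platform aggregates and the duals are updated.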

An Online Mean Field Approach for Hybrid Edge Server Provision

Zhiyuan Wang (The Chinese University of Hong Kong, China), Jiancheng Ye (Network Technology Lab, Huawei Technologies Co., Ltd.), John C.S. Lui (The Chinese University of Hong Kong, China)

The performance of an edge computing system primarily depends on the edge server provision mode, the task migration scheme, and the computing resource configuration. This paper studies how to perform dynamic resource configuration for hybrid edge server provision under two decentralized task migration schemes. We formulate the dynamic resource configuration as a multi-period online cost minimization problem, aiming to jointly minimize the performance degradation (i.e., execution latency) and the operating expenditure. Due to the stochastic nature of the system, one can only observe the performance of the currently installed configuration, a setting known as partial feedback. To overcome this challenge, we derive a deterministic mean field model to approximate the large-scale stochastic edge computing system. We then propose an online mean-field-aided resource configuration policy and show that it performs asymptotically as well as the offline optimal configuration. Numerical results show that the mean field model significantly improves the convergence speed of the online resource configuration problem. Moreover, under the two decentralized task migration schemes, our proposed policy considerably reduces the operating cost (by 23%) and incurs little communication overhead.
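
As a toy illustration of how a deterministic mean-field prediction can guide configuration choice under partial feedback, consider the sketch below. The candidate configurations, cost models, and bias-correction rule are made-up assumptions for exposition; they are not the paper's policy or its performance model.

import numpy as np

def mean_field_aided_configuration(candidate_configs, predicted_cost, observe_cost, rounds=200, lr=0.3):
    """Toy online loop in the spirit of mean-field-aided configuration (illustrative only).

    predicted_cost(c): deterministic mean-field estimate of the per-period cost of config c.
    observe_cost(c):   noisy cost observed for the installed config (partial feedback).
    A per-config bias term corrects the mean-field prediction using the feedback seen so far.
    """
    bias = {c: 0.0 for c in candidate_configs}
    for _ in range(rounds):
        c = min(candidate_configs, key=lambda k: predicted_cost(k) + bias[k])  # install best guess
        realized = observe_cost(c)                     # only the installed config is observed
        bias[c] += lr * (realized - (predicted_cost(c) + bias[c]))
    return min(candidate_configs, key=lambda k: predicted_cost(k) + bias[k])

# Usage example with a deliberate model mismatch at config 2
rng = np.random.default_rng(1)
configs = [1, 2, 4, 8]                                  # e.g., number of active edge servers
mf_cost = lambda c: 10.0 / c + 0.8 * c                  # mean-field estimate: latency + operating cost
real_cost = lambda c: mf_cost(c) + (0.6 if c == 2 else 0.0) + rng.normal(0.0, 0.3)
best = mean_field_aided_configuration(configs, mf_cost, real_cost)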

Joint Update Rate Adaptation in Multiplayer Cloud-Edge Gaming Services: Spatial Geometry and Performance Tradeoffs

Saadallah Kassir (The University of Texas at Austin, USA), Gustavo de Veciana (The University of Texas at Austin, USA), Nannan Wang (Fujitsu Network Communications, USA), Xi Wang (Fujitsu Network Communications, USA), Paparao Palacharla (Fujitsu Network Communications, USA)

In this paper, we analyze the performance of Multiplayer Cloud Gaming (MCG) systems. To that end, we introduce a model and a new MCG Quality-of-Service (QoS) metric that captures the freshness of the players' updates and the fairness of their gaming experience. We introduce an efficient measurement-based Joint Multiplayer Rate Adaptation (JMRA) algorithm that optimizes the MCG-QoS by increasing the update rates of players subject to large (and possibly varying) network transport delays. The resulting MCG-QoS is shown to be Schur-concave in the network delays, leading to natural characterizations and performance comparisons associated with the players' spatial geometry and network congestion. In particular, joint rate adaptation enables service providers to combat variability in network delays and the players' geographic spread to achieve high service coverage. This, in turn, allows us to explore the spatial density and capacity of the compute resources that need to be provisioned. Finally, we leverage tools from majorization theory to show how service placement decisions can be made to improve the robustness of the MCG-QoS to stochastic network delays.
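
For reference, the standard majorization-theoretic notions invoked here: for delay vectors $d, d' \in \mathbb{R}^n$ with equal sums, $d \prec d'$ ($d'$ majorizes $d$) if $\sum_{i=1}^{k} d_{[i]} \le \sum_{i=1}^{k} d'_{[i]}$ for all $k$, where $d_{[1]} \ge \dots \ge d_{[n]}$ are the components in decreasing order. A QoS function $Q$ is Schur-concave if $d \prec d'$ implies $Q(d) \ge Q(d')$; informally, for the same total delay, a more balanced delay vector yields no worse MCG-QoS.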

Session Chair

Shizhen Zhao (SJTU)

Session 7

Emerging Topics

Conference: 1:30 PM — 3:00 PM CST
Local: Jul 29 Thu, 1:30 AM — 3:00 AM EDT

SkyHaul: An Autonomous Gigabit Network Fabric In The Sky

Ramanujan K Sheshadri (NEC Laboratories America, USA), Eugene Chai (NEC Laboratories America, USA), Karthikeyan Sundaresan (NEC Laboratories America, USA), Sampath Rangarajan (NEC Laboratories America, USA)

We design and build SkyHaul, the first large-scale, self-organizing network of unmanned aerial vehicles (UAVs) connected by a mmWave wireless mesh backhaul. While the mmWave backhaul paves the way for a new class of bandwidth-intensive, latency-sensitive cooperative applications (e.g., LTE coverage during disasters), the network of UAVs allows these applications to operate at ranges far beyond the line-of-sight distances that limit individual UAVs today. To realize the challenging vision of deploying and maintaining an airborne mmWave mesh backhaul that caters to dynamic applications and events, SkyHaul's design incorporates several elements: (i) role-specific UAV operations that simultaneously address application tracking and backhaul connectivity; (ii) novel algorithms that jointly address the deployment problem (position and yaw of the UAVs) and traffic routing across the UAV network; and (iii) a provably optimal solution for fast and safe reconfiguration of the UAV backhaul during application dynamics. We implement SkyHaul on four DJI Matrice 600 Pro UAVs to demonstrate its practicality and performance through autonomous flight operations, complemented by large-scale simulations.

Crowdfunding with Strategic Pricing and Information Disclosure

Qi Shao (The Chinese University of Hong Kong, China), Man Hon Cheung (City University of Hong Kong, China), Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

The crowdfunding industry is expected to reach a volume of $90 billion per year. In crowdfunding, a creator must decide not only the pricing but also when and how frequently to disclose the campaign's progress to contributors, in order to maximize the project revenue. In this paper, we present a first analytical study of how the creator's pricing and information disclosure strategies affect the potential contributors' belief update process, and hence the project's success and the creator's expected revenue. Specifically, we consider a multi-stage crowdfunding model, where a stage corresponds to the period between two of the creator's information disclosures. At the beginning of the campaign, the creator announces her pricing decision and information disclosure strategy to maximize revenue. Contributors arriving in each subsequent stage then choose whether to contribute, based not only on the pledging status disclosed so far but also on their estimate of the impact of their decisions on later contributors. Such a model is challenging to optimize because of the coupling across multiple stages, especially with contributors anticipating future stages. Nevertheless, we are able to characterize the contributors' threshold-based equilibrium pledging decisions, and we incorporate this structural result into the creator's mixed-integer revenue maximization problem. Through both analytical and numerical studies, we show that the contributors' prior belief about the percentage of high-valuation contributors plays a critical role in the creator's optimal strategic information disclosure decisions. When the contributors hold a high prior belief, the creator should not announce the pledging history until all contributors have made their pledging decisions. When the prior belief is low, the creator should disclose more often.
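
To make the threshold structure concrete (a generic illustration of what "threshold-based" means here, not the paper's specific equilibrium characterization): in equilibrium, a contributor arriving in stage $t$ with private valuation $v$ pledges if and only if $v \ge \theta_t(h_t)$, where $h_t$ is the pledging history disclosed up to stage $t$; the creator's disclosure strategy thus shapes revenue by shifting these stage-dependent cutoffs.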

uScope: A Tool for Network Managers to Validate Delay-Based SLAs

Peshal Nayak (Samsung Research America, USA), Edward W. Knightly (Rice University, USA)

This paper presents uScope, an AP-side framework for validating delay-based SLAs. Specifically, uScope enables the estimation of WLAN uplink latency for any associated STA and its decomposition into constituent components. uScope requires no active probing, no special-purpose software installation on the STAs, and no additional infrastructure to collect extra information; it makes its estimates solely from passive AP-side observations. We implement uScope on a commodity hardware platform and conduct extensive field trials on a university campus and in a residential apartment complex. In over 1 million tests, uScope demonstrates high estimation accuracy, with mean estimation errors under 10% for all estimated parameters.

Session Chair

Zhida Qin (BIT)

Session 8

Learning for Wireless Networks

Conference: 3:30 PM — 5:30 PM CST
Local: Jul 29 Thu, 3:30 AM — 5:30 AM EDT

DeepBeam: Deep Waveform Learning for Coordination-Free Beam Management in mmWave Networks

Michele Polese (Northeastern University, USA), Francesco Restuccia (Northeastern University, USA), Tommaso Melodia (Northeastern University, USA)

Highly directional millimeter wave (mmWave) radios need to perform beam management to establish and maintain reliable links. To achieve this objective, existing solutions mostly rely on explicit coordination between the transmitter (TX) and the receiver (RX), which significantly reduces the airtime available for communication and further complicates the network protocol design. This paper advances the state of the art by presenting DeepBeam, a framework for beam management that requires neither pilot sequences from the TX nor any beam sweeping or synchronization at the RX. This is achieved by inferring (i) the Angle of Arrival (AoA) of the beam and (ii) the actual beam being used by the transmitter through waveform-level deep learning on ongoing transmissions between the TX and other receivers. In this way, the RX can associate Signal-to-Noise Ratio (SNR) levels to beams without explicit coordination with the TX. This is possible because different beam patterns introduce different "impairments" to the waveform, which can subsequently be learned by a convolutional neural network (CNN). To demonstrate the generality of DeepBeam, we conduct an extensive experimental data collection campaign in which we collect more than 4 TB of mmWave waveforms with (i) 4 phased-array antennas at 60.48 GHz, (ii) 2 codebooks containing 24 one-dimensional beams and 12 two-dimensional beams, (iii) 3 receiver gains, (iv) 3 different AoAs, and (v) multiple TX and RX locations. Moreover, we collect waveform data with two custom-designed mmWave software-defined radios with fully digital beamforming architectures at 58 GHz. We also implement our learning models on FPGA to evaluate latency performance. Results show that DeepBeam (i) achieves accuracy of up to 96%, 84%, and 77% with a 5-beam, 12-beam, and 24-beam codebook, respectively; and (ii) reduces latency by up to 7x with respect to the 5G NR initial beam sweep in a default configuration and with a 12-beam codebook. The waveform dataset and the full DeepBeam code repository are publicly available.
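
To give a sense of what waveform-level classification involves, here is a minimal sketch of a small 1-D CNN that maps a window of raw I/Q samples to a beam index. The layer sizes, window length, and two-channel input layout are illustrative assumptions, not DeepBeam's published architecture.

import torch
import torch.nn as nn

class BeamCNN(nn.Module):
    """Toy waveform classifier: maps a window of I/Q samples to a beam index."""
    def __init__(self, num_beams=24, window=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(64 * (window // 16), num_beams)

    def forward(self, iq):                        # iq: (batch, 2, window), stacked I and Q
        z = self.features(iq)
        return self.classifier(z.flatten(1))      # per-beam logits

# Example: classify a random batch of 8 windows of 1024 samples
model = BeamCNN()
logits = model(torch.randn(8, 2, 1024))
predicted_beam = logits.argmax(dim=1)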

MmWave Codebook Selection in Rapidly-Varying Channels via Multinomial Thompson Sampling

Yi Zhang (The University of Texas at Austin, USA), Soumya Basu (Mountain View, CA, USA), Sanjay Shakkottai (The University of Texas at Austin, USA), Robert W. Heath Jr. (North Carolina State University, USA)

Millimeter-wave (mmWave) communication using directional beams is a key enabler for high-throughput mobile ad hoc networks. These directional beams are organized into multiple codebooks according to beam resolution, with each codebook consisting of a set of equal-width beams that cover the whole angular space. A codebook with narrow beams delivers high throughput, at the expense of a longer scanning time. Overall throughput maximization is therefore achieved by selecting a mmWave codebook that balances beamwidth (beamforming gain) against beam-alignment overhead. Further, these codebooks exhibit natural structure, such as a non-decreasing instantaneous rate or a unimodal throughput as one traverses from the codebook with the widest beams to the one with the narrowest. We study the codebook selection problem through a multi-armed bandit (MAB) formulation in mmWave networks with rapidly varying channels. We develop several novel Thompson Sampling-based algorithms for this setting under different codebook structures, with theoretical guarantees on regret. We further collect real-world (60 GHz) measurements with 12-antenna phased arrays and show the performance benefits of our approaches in an IEEE 802.11ad/ay emulation setting.
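
For intuition, the sketch below shows plain Thompson Sampling over a few codebooks with a Beta-Bernoulli reward model. It is an illustrative simplification only; the paper's algorithms use multinomial rewards and exploit the rate/throughput structure across codebooks.

import numpy as np

def thompson_codebook_selection(pull, num_codebooks=3, rounds=1000, rng=np.random.default_rng(0)):
    """Toy Beta-Bernoulli Thompson Sampling over codebooks (illustrative only).

    pull(k) -> 1 if the slot using codebook k met its throughput target
    (after paying that codebook's alignment overhead), else 0.
    """
    alpha = np.ones(num_codebooks)                  # pseudo-counts of "good" slots
    beta = np.ones(num_codebooks)                   # pseudo-counts of "bad" slots
    for _ in range(rounds):
        samples = rng.beta(alpha, beta)             # sample a plausible mean reward per codebook
        k = int(np.argmax(samples))                 # play the codebook that looks best
        r = pull(k)
        alpha[k] += r                               # Bayesian posterior update
        beta[k] += 1 - r
    return int(np.argmax(alpha / (alpha + beta)))

# Usage example: codebook 1 meets the target 70% of the time, the others less often
rng_env = np.random.default_rng(1)
p = [0.4, 0.7, 0.55]
best = thompson_codebook_selection(lambda k: int(rng_env.random() < p[k]))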

Neuro-DCF: Design of Wireless MAC via Multi-Agent Reinforcement Learning Approach

Sangwoo Moon (Korea Advanced Institute of Science and Technology, South Korea), Sumyeong Ahn (Korea Advanced Institute of Science and Technology, South Korea), Kyunghwan Son (Korea Advanced Institute of Science and Technology, South Korea), Jinwoo Park (Korea Advanced Institute of Science and Technology, South Korea), Yung Yi (Korea Advanced Institute of Science and Technology, South Korea)

The carrier sense multiple access (CSMA) algorithm has been used for wireless medium access control (MAC) in the standard 802.11 implementation due to its simplicity and generality. An extensive body of research on CSMA exists, not only in the context of practical protocols but also on distributed, optimal MAC scheduling. However, the current state-of-the-art CSMA (and its extensions) still suffers from poor performance, especially in multi-hop scenarios, and often requires patch-based solutions rather than a universal one. In this paper, we propose an algorithm that adopts an *experience-driven* approach and trains a CSMA-based wireless MAC using deep reinforcement learning. We name our protocol Neuro-DCF. Two key challenges are: (i) a stable training method for distributed execution and (ii) a unified training method that embraces various interference patterns and configurations. For (i), we adopt a *multi-agent* reinforcement learning framework, and for (ii) we introduce a novel graph neural network (GNN) based training structure. We provide extensive simulation results demonstrating that Neuro-DCF significantly outperforms 802.11 DCF and O-DCF, a recent theory-based MAC protocol, especially in terms of delay performance while preserving optimal utility. We believe our multi-agent reinforcement learning approach will be of broad interest for learning-based network controllers at other layers that require distributed operation.
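
As a rough illustration of the per-node decision loop such an experience-driven MAC implies, the sketch below uses a tiny policy network to choose transmit/defer from a local observation vector. The observation features, network size, and stochastic access rule are illustrative assumptions; Neuro-DCF's actual state design, GNN-based training structure, and reward are described in the paper.

import torch
import torch.nn as nn

class MACPolicy(nn.Module):
    """Toy per-node policy: local observation -> P(transmit) for the next slot."""
    def __init__(self, obs_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, obs):                         # obs: (batch, obs_dim), e.g. carrier-sense
        return torch.sigmoid(self.net(obs))         # history, queue length, recent collisions

policy = MACPolicy()
obs = torch.randn(1, 8)                             # one node's local observation
transmit = torch.bernoulli(policy(obs)).item()      # stochastic access decision for this slot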

Weak Signal Detection in 5G+ Systems: A Distributed Deep Learning Framework

Yifan Guo (Case Western Reserve University, USA), Lixing Yu (Towson University, USA), Qianlong Wang, Tianxi Ji (Case Western Reserve University, USA), Yuguang Fang (The University of Florida, USA), Jin Wei-Kocsis (Purdue University, USA), Pan Li (Case Western Reserve University, USA)

Internet-connected mobile devices in 5G and beyond (5G+) systems are penetrating all aspects of people's daily life, transforming the way we conduct business and live. However, this rising trend has also placed an unprecedented traffic burden on existing telecommunication infrastructure, including cellular systems, consistently causing network congestion. Although additional spectrum resources have been allocated, exponentially increasing traffic tends to outpace the added capacity. In order to increase data rates and reduce latency, 5G+ systems have relied heavily on hyper-densification and higher frequency bands, resulting in dramatically increased interference temperature and, consequently, significantly more weak signals (i.e., signals with a low Signal-to-Interference-plus-Noise Ratio (SINR)). With traditional detection mechanisms, a large number of weak signals go undetected and are hence wasted, leading to poor throughput in 5G+ systems.
To this end, in this paper we develop an online weak-signal detection scheme that recovers weak signals for mobile users so as to significantly boost their data rates without additional spectrum resources. Specifically, we first formulate weak signal detection as a high-dimensional user-time signal-matrix factorization problem and solve it by devising a novel learning model called Dual-CNN Deep Matrix Factorization (DCDMF). Then, we design an online distributed learning framework to collaboratively train and update the proposed DCDMF model between an edge network and mobile users, using only correctly decoded signals at the users. Through simulations on real-world traffic datasets, we demonstrate that our weak signal detection scheme achieves a throughput gain of up to 3.12x, with an average computing latency of 0.84 ms per KB of signals.
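
To illustrate the general idea of factorizing a user-time signal matrix with two learned embedding networks, here is a minimal sketch. For brevity it uses small MLPs rather than the paper's dual CNNs, and the feature dimensions, loss, and training data are illustrative assumptions rather than DCDMF itself.

import torch
import torch.nn as nn

class DualNetMF(nn.Module):
    """Toy deep matrix factorization: row/column features -> latent factors -> entry estimate."""
    def __init__(self, user_dim=16, time_dim=16, latent=8):
        super().__init__()
        self.user_net = nn.Sequential(nn.Linear(user_dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.time_net = nn.Sequential(nn.Linear(time_dim, 32), nn.ReLU(), nn.Linear(32, latent))

    def forward(self, user_feat, time_feat):
        u = self.user_net(user_feat)                # (batch, latent) user embedding
        t = self.time_net(time_feat)                # (batch, latent) time-slot embedding
        return (u * t).sum(dim=1)                   # reconstructed signal-matrix entry

# One training step on a synthetic batch of observed (correctly decoded) entries
model = DualNetMF()
pred = model(torch.randn(64, 16), torch.randn(64, 16))
loss = nn.functional.mse_loss(pred, torch.randn(64))
loss.backward()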

Session Chair

Pan Li (Case Western Reserve University)
